Results 1 - 20 of 76
1.
iScience ; 27(2): 108915, 2024 Feb 16.
Article in English | MEDLINE | ID: mdl-38318347

ABSTRACT

The anterior insular cortex, a central node of the salience network, plays a critical role in cognitive control and attention. Here, we investigated the feasibility of enhancing attention using real-time fMRI neurofeedback training that targets the right anterior insular cortex (rAIC). Fifty-six healthy adults underwent two neurofeedback training sessions. The experimental group received feedback from neural responses in the rAIC, while control groups received sham feedback from the primary visual cortex or no feedback. Cognitive functioning was evaluated before, immediately after, and three months after training. Our results showed that only the rAIC neurofeedback group successfully increased activity in the rAIC. Furthermore, this group showed enhanced attention-related alertness up to three months after training. Our findings provide evidence for the potential of rAIC neurofeedback as a viable approach for enhancing attention-related alertness, which could pave the way for non-invasive therapeutic strategies to address conditions characterized by attention deficits.
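The closed-loop logic behind such a protocol is compact: each incoming functional volume is reduced to a region-of-interest signal, converted to percent signal change against a rest baseline, and rescaled into a value for the feedback display. The Python sketch below illustrates this idea only; the function name, the ±2% clipping range, and the thermometer-style mapping are illustrative assumptions rather than details taken from the study.

```python
import numpy as np

def roi_feedback(volume, roi_mask, baseline_mean):
    """Map the latest fMRI volume to a feedback value in [0, 1].

    volume:        3-D array, the most recent (motion-corrected) EPI volume
    roi_mask:      boolean array of the same shape selecting rAIC voxels
    baseline_mean: mean rAIC signal from a preceding rest block
    """
    roi_signal = volume[roi_mask].mean()
    # Percent signal change relative to the rest baseline.
    psc = 100.0 * (roi_signal - baseline_mean) / baseline_mean
    # Clip to an assumed +/-2% range and rescale to [0, 1] for display,
    # e.g. as the height of a thermometer bar shown to the participant.
    return float(np.clip((psc + 2.0) / 4.0, 0.0, 1.0))
```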

2.
Proc Natl Acad Sci U S A ; 121(10): e2316306121, 2024 Mar 05.
Article in English | MEDLINE | ID: mdl-38408255

ABSTRACT

Music is powerful in conveying emotions and triggering affective brain mechanisms. Affective brain responses in previous studies were, however, rather inconsistent, potentially because of the non-adaptive nature of the recorded music used so far. Live music, by contrast, can be dynamic and adaptive and is often modulated in response to audience feedback to maximize emotional responses in listeners. Here, we introduce a setup for studying emotional responses to live music in a closed-loop neurofeedback design. This setup linked live performances by musicians to neural processing in listeners, with listeners' amygdala activity displayed to musicians in real time. Brain activity was measured using functional MRI, and amygdala activity in particular was quantified in real time to provide the neurofeedback signal. Live pleasant and unpleasant piano music performed in response to amygdala neurofeedback from listeners was acoustically very different from comparable recorded music and elicited significantly higher and more consistent amygdala activity. Higher activity was also found in a broader neural network for emotion processing during live compared to recorded music. This included a predominance of aversive coding in the ventral striatum while listening to unpleasant music, and involvement of the thalamic pulvinar nucleus, presumably for regulating attentional and cortical flow mechanisms. Live music also stimulated a dense functional neural network with the amygdala as a central node influencing other brain systems. Finally, only live music showed a strong and positive coupling between features of the musical performance and brain activity in listeners, pointing to real-time and dynamic entrainment processes.


Subjects
Music, Music/psychology, Brain/physiology, Emotions/physiology, Amygdala/physiology, Affect, Magnetic Resonance Imaging, Auditory Perception/physiology
3.
Brain Stimul ; 17(1): 112-124, 2024.
Article in English | MEDLINE | ID: mdl-38272256

ABSTRACT

BACKGROUND: Deep brain stimulation (DBS) of the subthalamic nucleus (STN) considerably ameliorates cardinal motor symptoms in Parkinson's disease (PD). Reported effects of STN-DBS on secondary dysarthric (speech) and dysphonic (voice) symptoms, which originate from vocal tract motor dysfunction, are however inconsistent, with rather deleterious outcomes in post-surgical assessments. OBJECTIVE: To parametrically and intra-operatively investigate the effects of STN-DBS on perceptual and acoustic speech and voice quality in PD patients. METHODS: We assessed instantaneous intra-operative changes in speech and voice quality in PD patients (n = 38) elicited by direct STN stimulation under variations of central stimulation features (depth, laterality, and intensity), separately for each hemisphere. RESULTS: First, perceptual assessments across several raters revealed that certain speech and voice symptoms could be improved with STN-DBS, but this seems largely restricted to right STN-DBS. Second, computer-based acoustic analyses of speech and voice features revealed that both left and right STN-DBS could improve dysarthric speech symptoms, but only right STN-DBS considerably improved dysphonic symptoms, with left STN-DBS affecting only voice intensity features. Third, several subareas with optimal (and partly suboptimal) stimulation outcomes could be identified by stimulation depth and laterality, located in the motor STN proper and close to the associative STN. Fourth, low-to-medium stimulation intensities showed the most optimal and balanced effects compared with high intensities. CONCLUSIONS: STN-DBS can considerably improve both speech and voice quality given a carefully arranged stimulation regimen along central stimulation features.


Subjects
Deep Brain Stimulation, Dysphonia, Parkinson Disease, Subthalamic Nucleus, Humans, Speech, Voice Quality/physiology, Parkinson Disease/complications, Parkinson Disease/therapy, Subthalamic Nucleus/physiology
4.
Behav Res Methods ; 2023 Oct 04.
Article in English | MEDLINE | ID: mdl-37794208

ABSTRACT

All animals have to respond to immediate threats in order to survive. In non-human animals, a diversity of sophisticated behaviours has been observed, but research in humans is hampered by ethical considerations. Here, we present a novel immersive VR toolkit for the Unity engine that allows assessing threat-related behaviour in single, semi-interactive, and semi-realistic threat encounters. The toolkit contains a suite of fully modelled naturalistic environments, interactive objects, animated threats, and scripted systems. The researcher arranges these together to create an experimental manipulation, forming a series of independent "episodes" in immersive VR. Several purpose-built tools aid the design of these episodes, including a system for pre-sequencing the movement plans of animal threats. Episodes can be built with the assets included in the toolkit, but also easily extended with custom scripts, threats, and environments if required. During experiments, the software stores behavioural, movement, and eye-tracking data. With this software, we aim to facilitate the use of immersive VR in human threat-avoidance research and thus to close a gap in the understanding of human behaviour under threat.
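The toolkit itself is a Unity asset, so episodes are assembled from C# scripts and scene objects; the sketch below is only a language-neutral illustration, written in Python, of what an episode specification plausibly bundles. All names here (ThreatEpisode, the asset strings, the log channels) are hypothetical and not taken from the toolkit's actual API.

```python
from dataclasses import dataclass, field

@dataclass
class ThreatEpisode:
    """One self-contained threat encounter, assembled by the researcher."""
    environment: str                    # a fully modelled naturalistic scene
    threat: str                         # an animated threat asset
    # Pre-sequenced movement plan: the threat visits these 3-D positions in order.
    waypoints: list = field(default_factory=list)
    interactive_objects: list = field(default_factory=list)
    # Data channels recorded while the episode runs.
    log_channels: tuple = ("behaviour", "movement", "eye_tracking")

episode = ThreatEpisode(
    environment="forest_clearing",
    threat="snake",
    waypoints=[(0.0, 0.0, 5.0), (1.5, 0.0, 2.0)],
    interactive_objects=["rock", "stick"],
)
```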

5.
Article in English | MEDLINE | ID: mdl-37210564

ABSTRACT

BACKGROUND: Individuals with a history of child maltreatment (CM) are more often disliked, rejected, and victimized compared with individuals without such experiences. However, the factors contributing to these negative evaluations are so far unknown. OBJECTIVE: Based on previous research on adults with borderline personality disorder (BPD), this preregistered study assessed whether negative evaluations of adults with CM experiences, in comparison to unexposed controls, are mediated by more negative and less positive facial affect display. Additionally, it explored whether level of depression, severity of CM, social anxiety, social support, and rejection sensitivity influence these ratings. METHODS: Forty adults with CM experiences (CM+) and 40 non-maltreated (CM-) adults were filmed for measurement of affect display and rated on likeability, trustworthiness, and cooperativeness by 100 independent raters after zero acquaintance (no interaction) and by 17 raters after first acquaintance (short conversation). RESULTS: The CM+ and CM- groups were neither evaluated significantly differently nor showed significant differences in affect display. Contrasting previous research, higher levels of BPD symptoms predicted higher likeability ratings (p = .046), while complex post-traumatic stress disorder symptoms had no influence on ratings. CONCLUSIONS: The non-significant effects could be attributed to an insufficient number of participants, as our sample size allowed us to detect medium-sized effects (f² = .16 for evaluation; f² = .17 for affect display) with a power of .95. Moreover, aspects such as the presence of mental disorders (e.g., BPD or post-traumatic stress disorder) might have a stronger impact than CM per se. Future research should thus further explore the conditions (e.g., presence of specific mental disorders) under which individuals with CM are affected by negative evaluations, as well as factors that contribute to negative evaluations and problems in social relationships.

6.
J Acoust Soc Am ; 153(1): 384, 2023 01.
Article in English | MEDLINE | ID: mdl-36732275

ABSTRACT

Fear is a frequently studied emotion category in music and emotion research. However, research in music theory suggests that music can convey finer-grained subtypes of fear, such as terror and anxiety. Previous research on musically expressed emotions has neglected to investigate subtypes of fearful emotions. This study seeks to fill this gap in the literature. To that end, 99 participants rated the emotional impression of short excerpts of horror film music predicted to convey terror and anxiety, respectively. Then, the excerpts that most effectively conveyed these target emotions were analyzed descriptively and acoustically to demonstrate the sonic differences between musically conveyed terror and anxiety. The results support the hypothesis that music conveys terror and anxiety with markedly different musical structures and acoustic features. Terrifying music has a brighter, rougher, harsher timbre, is musically denser, and may be faster and louder than anxious music. Anxious music has a greater degree of loudness variability. Both types of fearful music tend towards minor modalities and are rhythmically unpredictable. These findings further support the application of emotional granularity in music and emotion research.
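Two of the reported acoustic dimensions map directly onto standard audio descriptors: timbral brightness is commonly summarized by the spectral centroid, and loudness variability by the spread of frame-wise RMS energy. The snippet below shows one way to compute both with the librosa library; it is a generic illustration of these descriptors, not the authors' analysis pipeline, and "excerpt.wav" is a placeholder file name.

```python
import librosa
import numpy as np

# Load a music excerpt at its native sampling rate.
y, sr = librosa.load("excerpt.wav", sr=None)

# Brightness: mean spectral centroid (higher = brighter timbre).
centroid = librosa.feature.spectral_centroid(y=y, sr=sr)
brightness = float(np.mean(centroid))

# Loudness variability: standard deviation of frame-wise RMS energy.
rms = librosa.feature.rms(y=y)
loudness_variability = float(np.std(rms))

print(f"brightness: {brightness:.1f} Hz, loudness variability: {loudness_variability:.4f}")
```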


Subjects
Fear, Music, Humans, Fear/psychology, Emotions, Music/psychology, Acoustics, Surveys and Questionnaires
7.
Cereb Cortex ; 33(4): 1170-1185, 2023 02 07.
Article in English | MEDLINE | ID: mdl-35348635

ABSTRACT

Voice signaling is integral to human communication, and a cortical voice area seemed to support the discrimination of voices from other auditory objects. This large cortical voice area in the auditory cortex (AC) was suggested to process voices selectively, but its functional differentiation remained elusive. We used neuroimaging while humans processed voices and nonvoice sounds, and artificial sounds that mimicked certain voice sound features. First and surprisingly, specific auditory cortical voice processing beyond basic acoustic sound analyses is only supported by a very small portion of the originally described voice area in higher-order AC located centrally in superior Te3. Second, besides this core voice processing area, large parts of the remaining voice area in low- and higher-order AC only accessorily process voices and might primarily pick up nonspecific psychoacoustic differences between voices and nonvoices. Third, a specific subfield of low-order AC seems to specifically decode acoustic sound features that are relevant but not exclusive for voice detection. Taken together, the previously defined voice area might have been overestimated since cortical support for human voice processing seems rather restricted. Cortical voice processing also seems to be functionally more diverse and embedded in broader functional principles of the human auditory system.


Subjects
Auditory Cortex, Voice, Humans, Acoustic Stimulation/methods, Auditory Perception, Sound, Magnetic Resonance Imaging/methods
8.
iScience ; 25(12): 105711, 2022 Dec 22.
Article in English | MEDLINE | ID: mdl-36578321

ABSTRACT

Speech comprehension counts as a benchmark outcome of cochlear implants (CIs), a focus that disregards the communicative importance of efficiently integrating audiovisual (AV) socio-emotional information. We investigated effects of time-synchronized facial information on vocal emotion recognition (VER). In Experiment 1, 26 CI users and normal-hearing (NH) individuals classified emotions for auditory-only, AV congruent, or AV incongruent utterances. In Experiment 2, we compared crossmodal effects between groups with adaptive testing, calibrating auditory difficulty via voice morphs ranging from emotional caricatures to anti-caricatures. CI users performed worse than NH individuals, and VER was correlated with quality of life. Importantly, CI users showed larger VER benefits from congruent facial emotional information even at equal auditory-only performance levels, suggesting that their larger crossmodal benefits result from deafness-related compensation rather than degraded acoustic representations. Crucially, vocal caricatures enhanced CI users' VER. The findings advocate AV stimuli during CI rehabilitation and suggest that caricaturing holds promise for both perceptual training and sound-processor technology.
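Adaptive testing of this kind typically adjusts stimulus difficulty trial by trial until performance converges on a target accuracy. The abstract does not specify the staircase rule, so the sketch below shows a generic two-down/one-up staircase over a caricature-to-anti-caricature morph level as one plausible implementation; the function names and step size are assumptions.

```python
def staircase(run_trial, level=1.0, step=0.1, n_trials=40):
    """Two-down/one-up staircase over a morph level in [-1, 1].

    level:     morph position (1.0 = full caricature, easiest;
               -1.0 = anti-caricature, hardest)
    run_trial: callback that presents a stimulus at `level` and
               returns True if the response was correct
    A two-down/one-up rule converges on roughly 70.7% correct.
    """
    correct_streak = 0
    history = []
    for _ in range(n_trials):
        correct = run_trial(level)
        history.append((level, correct))
        if correct:
            correct_streak += 1
            if correct_streak == 2:   # two correct in a row: make it harder
                level = max(level - step, -1.0)
                correct_streak = 0
        else:                         # one error: make it easier
            level = min(level + step, 1.0)
            correct_streak = 0
    return history
```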

9.
Transl Psychiatry ; 12(1): 494, 2022 11 29.
Article in English | MEDLINE | ID: mdl-36446775

ABSTRACT

Psychopathy is associated with severe deviations in social behavior and cognition. While previous research described such cognitive and neural alterations in the processing of rather specific social information from human expressions, open questions remain concerning the central and differential neurocognitive deficits underlying psychopathic behavior. Here we investigated three rather unexplored factors to explain these deficits: first, by assessing psychopathy subtypes in social cognition; second, by investigating the discrimination of social communication sounds (speech, non-speech) from other non-social sounds; and third, by determining the neural overlap of social cognition impairments with autistic traits, given potential common deficits in the processing of communicative voice signals. The study was exploratory, with a focus on how psychopathic and autistic traits differentially influence the function of social cognitive and affective brain networks in response to social voice stimuli. We used a parametric data analysis approach on a sample of 113 participants (47 male, 66 female) aged between 18 and 40 years (mean 25.59, SD 4.79). Our data revealed four important findings. First, we found a phenotypical overlap between secondary, but not primary, psychopathy and autistic traits. Second, primary psychopathy showed various deficits in neural voice processing nodes (speech, non-speech voices) and in brain systems for social cognition (mirroring, mentalizing, empathy, emotional contagion). Primary psychopathy also showed deficits in the basal ganglia (BG) system that seem specific to the social decoding of communicative voice signals. Third, neural deviations in secondary psychopathy were restricted to social mirroring and mentalizing impairments, but with additional and so far undescribed deficits at the level of auditory sensory processing, potentially concerning ventral auditory stream mechanisms (auditory object identification). Fourth, high autistic traits also revealed neural deviations in sensory cortices, but rather in the dorsal auditory processing streams (communicative context encoding). Taken together, social cognition of voice signals shows considerable deviations in psychopathy, with differential and newly described deficits in the BG system in primary psychopathy and at the level of sensory processing in secondary psychopathy. These deficits seem especially triggered during social cognition of vocal communication signals.


Subjects
Autistic Disorder, Voice, Humans, Female, Male, Adolescent, Young Adult, Adult, Social Cognition, Communication, Speech
10.
Sci Rep ; 12(1): 16754, 2022 10 06.
Article in English | MEDLINE | ID: mdl-36202849

ABSTRACT

Emotional prosody perception (EPP) unfolds in time, given the intrinsic temporal nature of auditory stimuli, and has been shown to be modulated by spatial attention. Yet the influence of temporal attention (TA) on EPP remains largely unexplored. TA studies manipulate subjects' motor preparedness for an upcoming event: targets to discriminate arrive quickly in short, attended trials and at a later time point in long, unattended trials. Here we used a classic TA-manipulation paradigm to investigate its influence on behavioral responses during EPP (n = 100). We found that the TA bias was associated with slower reaction times (RTs) for angry but not neutral prosody, and only during short trials. TA biases were also observed in accuracy measures only for angry voices, especially during short trials, suggesting that neutral stimuli are less subject to TA biases. Importantly, emotional facilitation, with faster RTs for angry compared with neutral voices, was observed when the stimuli were temporally attended and during short trials, suggesting an influential role of TA in EPP. Together, these results demonstrate for the first time the major influence of TA on RTs and behavioral performance when discriminating emotional prosody.


Subjects
Emotions, Speech Perception, Acoustic Stimulation/methods, Anger/physiology, Auditory Perception/physiology, Bias, Emotions/physiology, Perception, Speech Perception/physiology
11.
Front Psychol ; 13: 866613, 2022.
Article in English | MEDLINE | ID: mdl-35795412

ABSTRACT

Research over the past few decades has shown the positive influence that cognitive, social, and physical activities have on older adults' cognitive and affective health. Interventions targeting health-related behaviors, such as cognitive activation, physical activity, social activity, nutrition, mindfulness, and creativity, have proven particularly beneficial. Whereas most intervention studies apply unimodal interventions, such as cognitive training (CT), this study investigates the potential to foster cognitive and affective health factors of older adults by means of an autonomy-supportive multimodal intervention (MMI). The intervention integrates everyday-life recommendations for six evidence-based areas combined with psychoeducational information. This randomized controlled trial compares the effects of an MMI and CT with those of a waiting control group (WCG) on cognitive and affective factors, everyday memory performance, and activity in everyday life. Three groups, comprising a total of 119 adults aged 65-86 years, attended a 5- or 10-week intervention. Specifically, one group completed a 10-week MMI, the second group completed 5 weeks of computer-based CT followed by a 5-week MMI, and the third group paused before completing the MMI in the last 5 weeks. All participants completed online surveys and cognitive tests at three test points. The findings showed an increase in the number and variability of activities in the everyday lives of all participants. Post hoc analyses comparing the cognitive performance of the MMI with CT indicate similar (classic memory and attention) or better (working memory) effects. Furthermore, results on far-transfer variables showed interesting trends in favor of the MMI, such as increased well-being and a better attitude toward the aging brain. The MMI group also showed the largest perceived improvements of all groups on all self-reported personal variables (memory in everyday life and stress). The results indicate a positive trend of the MMI on cognitive and affective factors of older adults. These tendencies show the potential of a multimodal approach compared with training a specific cognitive function. Moreover, the findings suggest that information about the MMI motivates participants to increase activity variability and frequency in everyday life. Finally, the results could also have implications for the primary prevention of neurocognitive deficits and degenerative diseases.

12.
Prog Neurobiol ; 214: 102278, 2022 07.
Article in English | MEDLINE | ID: mdl-35513165

ABSTRACT

Affect signaling in human communication involves cortico-limbic brain systems for decoding affect information, such as that expressed in vocal intonations during affective speech. Both the affecto-acoustic speech profile of speakers and the cortico-limbic affect recognition network of listeners were previously identified using non-social and non-adaptive research protocols. However, these protocols neglected the inherent socio-dyadic nature of affective communication, thus underestimating the real-time adaptive dynamics of affective speech that maximize listeners' neural effects and affect recognition. To approximate this socio-adaptive and neural context of affective communication, we used an innovative real-time neuroimaging setup that linked speakers' live affective speech production with listeners' limbic brain signals, which served as a proxy for affect recognition. We show that affective speech communication is acoustically more distinctive, adaptive, and individualized in a live adaptive setting and capitalizes more efficiently on neural affect decoding mechanisms in limbic and associated networks than non-adaptive affective speech communication. Only live affective speech produced in adaptation to listeners' limbic signals was closely linked to their emotion recognition, as quantified by correlations between speakers' acoustics and listeners' emotional ratings. Furthermore, while live and adaptive aggressive speaking directly modulated limbic activity in listeners, joyful speaking modulated limbic activity in connection with the ventral striatum, which is involved in, among other functions, the processing of pleasure. Thus, evolved neural mechanisms for affect decoding seem largely optimized for interactive and individually adaptive communicative contexts.


Subjects
Speech, Voice, Aggression, Communication, Emotions, Humans
13.
Ear Hear ; 43(4): 1178-1188, 2022.
Article in English | MEDLINE | ID: mdl-34999594

ABSTRACT

OBJECTIVES: Research on cochlear implants (CIs) has focused on speech comprehension, with little research on perception of vocal emotions. We compared emotion perception in CI users and normal-hearing (NH) individuals, using parameter-specific voice morphing. DESIGN: Twenty-five CI users and 25 NH individuals (matched for age and gender) performed fearful-angry discriminations on bisyllabic pseudoword stimuli from morph continua across all acoustic parameters (Full), or across selected parameters (F0, Timbre, or Time information), with other parameters set to a noninformative intermediate level. RESULTS: Unsurprisingly, CI users as a group showed lower performance in vocal emotion perception overall. Importantly, while NH individuals used timbre and fundamental frequency (F0) information to equivalent degrees, CI users were far more efficient in using timbre (compared to F0) information for this task. Thus, under the conditions of this task, CIs were inefficient in conveying emotion based on F0 alone. There was enormous variability among CI users, with low performers responding close to guessing level. Echoing previous research, we found that better vocal emotion perception was associated with better quality-of-life ratings. CONCLUSIONS: Some CI users can utilize timbre cues remarkably well when perceiving vocal emotions.
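The logic of parameter-specific morphing can be pictured as interpolating only selected acoustic tracks between the two emotional endpoints while pinning all remaining tracks at the noninformative midpoint. The sketch below schematizes that selection step; real voice morphing requires specialized analysis-resynthesis software, and all names here are illustrative rather than the study's actual tooling.

```python
import numpy as np

def parameter_specific_morph(feat_fear, feat_anger, alpha, morphed=("F0",)):
    """Interpolate only the named parameters between two expressions.

    feat_fear / feat_anger: dicts of aligned per-frame tracks, e.g.
        {"F0": ndarray, "timbre": ndarray, "time": ndarray}
    alpha: 0.0 = fearful endpoint, 1.0 = angry endpoint.
    Tracks not listed in `morphed` stay at the noninformative
    intermediate level (alpha = 0.5), as in the F0/Timbre/Time conditions.
    """
    out = {}
    for name in feat_fear:
        a = alpha if name in morphed else 0.5
        out[name] = (1.0 - a) * feat_fear[name] + a * feat_anger[name]
    return out
```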


Subjects
Cochlear Implantation, Cochlear Implants, Music, Speech Perception, Acoustic Stimulation, Auditory Perception, Emotions, Humans, Quality of Life
14.
Front Psychol ; 12: 675956, 2021.
Article in English | MEDLINE | ID: mdl-34484034

ABSTRACT

Mugbook searches are conducted when a suspect is not known, to assess whether a previously convicted person might be recognized as a potential culprit. The goal of the two experiments reported here was to analyze whether prior statements and information about the suspect can help evaluate whether such a mugbook search is subsequently advisable. In Experiment 1, memory accuracy for person descriptors was tested in order to analyze which attributes could be chosen to narrow down the mugbook prior to testing. Results showed that age was the most accurate descriptor, followed by ethnicity and height. At the same time, witnesses' low subjective assessments of the accuracy of their culprit descriptions appeared to diverge from their actual, objective accuracy. In Experiment 2, a mugbook search was conducted after participants viewed a video of a staged crime and gave a description of the culprit. Results showed that accuracy in mugbook searches correlated positively with the total number of person descriptors given by the witness, as well as with the witness's description of external facial features. Predictive confidence (i.e., the subjective rating of one's own performance in the subsequent mugbook search), however, showed no relation to identification accuracy in the actual mugbook search. These results highlight that mugbook searches should be guided not by the witness's subjective estimation of their own performance but rather by the actual statements and descriptions the witness can give about the culprit.

15.
Behav Brain Sci ; 44: e118, 2021 09 30.
Article in English | MEDLINE | ID: mdl-34588032

ABSTRACT

The credible signaling theory underexplains the evolutionary added value of less-credible affective musical signals compared to vocal signals. The theory might be extended to account for the motivation for, and consequences of, culturally decontextualizing a biologically contextualized signal. Musical signals are twofold, communicating "emotional fiction" alongside biological meaning, and could have filled an adaptive need for affect induction during storytelling.


Subjects
Music, Biological Evolution, Communication, Emotions, Humans
16.
Commun Biol ; 4(1): 801, 2021 06 25.
Article in English | MEDLINE | ID: mdl-34172824

ABSTRACT

The temporal voice areas (TVAs) in bilateral auditory cortex (AC) appear specialized for voice processing. Previous research assumed a uniform functional profile for the TVAs, which are broadly spread along the bilateral AC. Alternatively, the TVAs might comprise separate AC nodes controlling differential neural functions for voice and speech decoding, organized as local micro-circuits. To investigate these micro-circuits, we modeled the directional connectivity between TVA nodes during voice processing in humans while acquiring brain activity using neuroimaging. Results show several bilateral AC nodes for general voice decoding (speech and non-speech voices) and for speech decoding in particular. Furthermore, non-hierarchical and differential bilateral AC networks manifest distinct excitatory and inhibitory pathways for voice and speech processing. Finally, while voice and speech processing seem to have distinctive but integrated neural circuits in the left AC, the right AC reveals disintegrated neural circuits for both sound types. Altogether, we demonstrate a functional heterogeneity in the TVAs for voice decoding based on local micro-circuits.


Subjects
Auditory Cortex/physiology, Auditory Perception/physiology, Nerve Net, Speech Perception/physiology, Adolescent, Adult, Female, Humans, Male, Voice, Young Adult
17.
Brain Behav ; 11(6): e02140, 2021 06.
Article in English | MEDLINE | ID: mdl-33951323

ABSTRACT

BACKGROUND AND OBJECTIVE: Although avatars are now widely used in advertising, entertainment, and business, no study has investigated whether brain lesions in neurological patients interfere with brain activation in response to dynamic avatar facial expressions. The aim of our event-related fMRI study was to compare brain activation differences between people with epilepsy and controls during the processing of fearful and neutral dynamic expressions displayed by human or avatar faces. METHODS: Using functional magnetic resonance imaging (fMRI), we examined brain responses to dynamic facial expressions of trained actors and their avatar look-alikes in 16 people with temporal lobe epilepsy (TLE) and 26 controls. The actors' fearful and neutral expressions were recorded on video and conveyed onto their avatar look-alikes by face tracking. RESULTS: Our fMRI results show that people with TLE exhibited reduced response differences between fearful and neutral expressions displayed by humans in the right amygdala and the left superior temporal sulcus (STS). Further, TLE was associated with reduced response differences between human and avatar fearful expressions in the dorsal pathway of the face perception network (STS and inferior frontal gyrus) as well as in the medial prefrontal cortex. CONCLUSIONS: Taken together, these findings suggest that brain responses to dynamic facial expressions are altered in people with TLE compared with neurologically healthy individuals, regardless of whether the face is human or computer-generated. In TLE, areas sensitive to dynamic facial features and associated with processes relating to the self and others are particularly affected when processing dynamic human and avatar expressions. Our findings highlight that the impact of TLE on facial emotion processing extends to artificial faces and should be considered when applying dynamic avatars in the context of neurological conditions.


Subjects
Temporal Lobe Epilepsy, Facial Recognition, Brain Mapping, Emotions, Temporal Lobe Epilepsy/diagnostic imaging, Facial Expression, Humans, Magnetic Resonance Imaging
18.
Sci Rep ; 11(1): 10645, 2021 05 20.
Article in English | MEDLINE | ID: mdl-34017050

ABSTRACT

Until recently, research on brain networks underlying the decoding and processing of emotional voice prosody focused on modulations in primary and secondary auditory cortices, ventral frontal and prefrontal cortices, and the amygdala. A specific role of the basal ganglia and cerebellum has recently come into the spotlight. In the present study, we aimed to characterize the role of these subcortical brain regions in vocal emotion processing, at the level of both brain activation and functional and effective connectivity, using high-resolution functional magnetic resonance imaging. Variance explained by low-level acoustic parameters (fundamental frequency, voice energy) was also modelled. Whole-brain data revealed the expected contributions of the temporal and frontal cortices, basal ganglia, and cerebellum to vocal emotion processing, while functional connectivity analyses highlighted correlations between the basal ganglia and cerebellum, especially for angry voices. Seed-to-seed and seed-to-voxel effective connectivity revealed direct connections within the basal ganglia, especially between the putamen and external globus pallidus, and between the subthalamic nucleus and the cerebellum. Our results speak in favour of crucial contributions of the basal ganglia, especially the putamen, external globus pallidus, and subthalamic nucleus, and of several cerebellar lobules and nuclei, to efficient decoding of and response to vocal emotions.
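The two low-level acoustic parameters modelled here, fundamental frequency and voice energy, are straightforward to extract from the stimuli before entering them as regressors. The snippet below shows one common way to do this with librosa; it is a generic illustration rather than the authors' pipeline, and "voice_stimulus.wav" is a placeholder.

```python
import librosa
import numpy as np

# Load a voice stimulus at its native sampling rate.
y, sr = librosa.load("voice_stimulus.wav", sr=None)

# Fundamental frequency (F0) track via probabilistic YIN.
f0, voiced_flag, voiced_prob = librosa.pyin(
    y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C6"), sr=sr
)

# Frame-wise voice energy (RMS).
rms = librosa.feature.rms(y=y)[0]

# One summary value per stimulus, usable as parametric regressors.
mean_f0 = float(np.nanmean(f0))   # pyin returns NaN for unvoiced frames
mean_energy = float(np.mean(rms))
```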


Subjects
Basal Ganglia/diagnostic imaging, Cerebellum/diagnostic imaging, Emotions/physiology, Magnetic Resonance Imaging, Voice/physiology, Acoustic Stimulation, Acoustics, Adult, Female, Humans, Male, Nerve Net/physiology
19.
PLoS Biol ; 19(4): e3000751, 2021 04.
Article in English | MEDLINE | ID: mdl-33848299

ABSTRACT

Across many species, scream calls signal the affective significance of events to other agents. Scream calls were often thought to be of a generic alarming and fearful nature, signaling potential threats, with instantaneous, involuntary, and accurate recognition by perceivers. However, scream calls are more diverse in their affective signaling nature than merely alarming of threats in a fearful way, and the broader sociobiological relevance of various scream types is thus unclear. Here we used four different psychoacoustic, perceptual decision-making, and neuroimaging experiments in humans to demonstrate the existence of at least six psychoacoustically distinctive types of scream calls of both alarming and non-alarming nature, rather than only screams caused by fear or aggression. Second, based on perceptual and processing sensitivity measures for decision-making during scream recognition, we found that alarm screams (with some exceptions) were overall discriminated the worst, were responded to the slowest, and were associated with lower perceptual sensitivity for their recognition compared with non-alarm screams. Third, the neural processing of alarm compared with non-alarm screams during an implicit processing task elicited only minimal neural signal and connectivity in perceivers, contrary to the frequent assumption of a threat-processing bias of the primate neural system. These findings show that scream calls are more diverse in their signaling and communicative nature in humans than previously assumed and, in contrast to a commonly observed threat-processing bias in perceptual discriminations and neural processes, non-alarm screams, and positive screams in particular, seem to be processed with higher efficiency in speeded discriminations and in implicit neural processing.
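Perceptual sensitivity in such decision-making tasks is conventionally quantified with the signal-detection index d', the difference between the z-transformed hit and false-alarm rates. A minimal sketch, with a standard log-linear correction against extreme rates, is shown below; the trial counts in the example are invented for illustration.

```python
from scipy.stats import norm

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Perceptual sensitivity d' = z(hit rate) - z(false-alarm rate).

    The log-linear correction (+0.5 per cell) avoids infinite z-scores
    when a rate would otherwise be exactly 0 or 1.
    """
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

# Hypothetical example: 30/50 targets recognized, 10/50 false alarms.
print(round(d_prime(30, 20, 10, 40), 2))
```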


Subjects
Auditory Perception/physiology, Discrimination (Psychology)/physiology, Fear/psychology, Voice Recognition/physiology, Adult, Auditory Pathways/diagnostic imaging, Auditory Pathways/physiology, Brain/diagnostic imaging, Female, Humans, Magnetic Resonance Imaging, Male, Physiological Pattern Recognition/physiology, Recognition (Psychology)/physiology, Sex Characteristics, Young Adult
20.
Hum Brain Mapp ; 42(5): 1503-1517, 2021 04 01.
Article in English | MEDLINE | ID: mdl-33615612

ABSTRACT

Voice signals are relevant for auditory communication and are thought to be processed in dedicated auditory cortex (AC) regions. While recent reports highlighted an additional role of the inferior frontal cortex (IFC), a detailed description of the integrated functioning of the AC-IFC network and its task relevance for voice processing is missing. Using neuroimaging, we tested sound categorization in human participants who focused either on the higher-order vocal-sound dimension (voice task) or on the feature-based intensity dimension (loudness task) while listening to the same sound material. We found differential involvement of the AC and IFC depending on the task performed and on whether the voice dimension was task relevant. First, when comparing neural vocal-sound processing in our task-based design with previously reported passive-listening designs, we observed highly similar cortical activations in the AC and IFC. Second, during task-based vocal-sound processing we observed voice-sensitive responses in the AC and IFC, whereas intensity processing was restricted to distinct AC regions. Third, the IFC flexibly adapted to the vocal sounds' task relevance, being active only when the voice dimension was task relevant. Fourth and finally, connectivity modeling revealed that vocal signals, independent of their task relevance, provided significant input to bilateral AC. However, only when attention was on the voice dimension did we find significant modulations of auditory-frontal connections. Our findings suggest that an integrated auditory-frontal network is essential for behaviorally relevant vocal-sound processing. The IFC seems to be an important hub of the extended voice network when representing higher-order vocal objects and guiding goal-directed behavior.


Subjects
Attention/physiology, Auditory Cortex/physiology, Auditory Perception/physiology, Connectome, Nerve Net/physiology, Prefrontal Cortex/physiology, Adult, Auditory Cortex/diagnostic imaging, Female, Humans, Magnetic Resonance Imaging, Male, Nerve Net/diagnostic imaging, Prefrontal Cortex/diagnostic imaging, Social Perception, Speech Perception/physiology, Young Adult